Results 1 - 18 of 18
1.
IEEE Trans Med Imaging ; 42(12): 3764-3778, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37610903

ABSTRACT

Convolutional neural networks (CNNs) are a promising technique for automated glaucoma diagnosis from images of the fundus, and these images are routinely acquired as part of an ophthalmic exam. Nevertheless, CNNs typically require a large amount of well-labeled data for training, which may not be available in many biomedical image classification applications, especially when diseases are rare and where labeling by experts is costly. This article makes two contributions to address this issue: 1) It extends the conventional Siamese network and introduces a training method for low-shot learning when labeled data are limited and imbalanced, and 2) it introduces a novel semi-supervised learning strategy that uses additional unlabeled training data to achieve greater accuracy. Our proposed multi-task Siamese network (MTSN) can employ any backbone CNN, and we demonstrate with four backbone CNNs that its accuracy with limited training data approaches the accuracy of backbone CNNs trained with a dataset that is 50 times larger. We also introduce One-Vote Veto (OVV) self-training, a semi-supervised learning strategy that is designed specifically for MTSNs. By taking both self-predictions and contrastive predictions of the unlabeled training data into account, OVV self-training provides additional pseudo labels for fine-tuning a pre-trained MTSN. Using a large (imbalanced) dataset with 66,715 fundus photographs acquired over 15 years, extensive experimental results demonstrate the effectiveness of low-shot learning with MTSN and semi-supervised learning with OVV self-training. Three additional, smaller clinical datasets of fundus images acquired under different conditions (cameras, instruments, locations, populations) are used to demonstrate the generalizability of the proposed methods.
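The veto idea behind OVV self-training can be sketched in a few lines. The function below is a hypothetical simplification: the confidence threshold, the probability-vector format, and the strict unanimity rule are illustrative assumptions, not the paper's exact procedure.

```python
def one_vote_veto(self_pred, contrastive_preds, threshold=0.9):
    """Return a pseudo label only when the self-prediction and every
    contrastive prediction confidently agree on one class; any
    dissenting or unconfident vote vetoes the sample (returns None)."""
    votes = [self_pred] + list(contrastive_preds)
    labels = [max(range(len(p)), key=p.__getitem__) for p in votes]
    if all(max(p) >= threshold for p in votes) and len(set(labels)) == 1:
        return labels[0]
    return None

print(one_vote_veto([0.95, 0.05], [[0.92, 0.08], [0.97, 0.03]]))  # 0 (unanimous)
print(one_vote_veto([0.95, 0.05], [[0.10, 0.90], [0.97, 0.03]]))  # None (vetoed)
```

The payoff of such a conservative rule is that only high-confidence, mutually consistent pseudo labels reach the fine-tuning stage.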


Subjects
Glaucoma; Humans; Glaucoma/diagnostic imaging; Fundus Oculi; Neural Networks, Computer; Supervised Machine Learning
2.
Ophthalmol Sci ; 3(1): 100233, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36545260

ABSTRACT

Purpose: To compare the diagnostic accuracy and explainability of a Vision Transformer deep learning technique, Data-efficient image Transformer (DeiT), and ResNet-50, trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS) to detect primary open-angle glaucoma (POAG) and identify the salient areas of the photographs most important for each model's decision-making process. Design: Evaluation of a diagnostic technology. Subjects, Participants, and Controls: Overall 66 715 photographs from 1636 OHTS participants and an additional 5 external datasets of 16 137 photographs of healthy and glaucoma eyes. Methods: Data-efficient image Transformer models were trained to detect 5 ground-truth OHTS POAG classifications: OHTS end point committee POAG determinations because of disc changes (model 1), visual field (VF) changes (model 2), or either disc or VF changes (model 3) and Reading Center determinations based on disc (model 4) and VFs (model 5). The best-performing DeiT models were compared with ResNet-50 models on OHTS and 5 external datasets. Main Outcome Measures: Diagnostic performance was compared using areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities. The explainability of the DeiT and ResNet-50 models was compared by evaluating the attention maps derived directly from DeiT against 3 gradient-weighted class activation map strategies. Results: Compared with our best-performing ResNet-50 models, the DeiT models demonstrated similar performance on the OHTS test sets for all 5 ground-truth POAG labels; AUROC ranged from 0.82 (model 5) to 0.91 (model 1). Data-efficient image Transformer AUROC was consistently higher than ResNet-50 on the 5 external datasets. For example, AUROC for the main OHTS end point (model 3) was between 0.08 and 0.20 higher in the DeiT than ResNet-50 models.
The saliency maps from the DeiT highlight localized areas of the neuroretinal rim, suggesting important rim features for classification. The same maps in the ResNet-50 models show a more diffuse, generalized distribution around the optic disc. Conclusions: Vision Transformers have the potential to improve generalizability and explainability in deep learning models, detecting eye disease and possibly other medical conditions that rely on imaging for clinical diagnosis and management.
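The AUROC values compared throughout these studies can be computed directly from classifier scores as a rank statistic: the probability that a randomly chosen positive case outranks a randomly chosen negative one. A minimal sketch with toy scores:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the normalized Mann-Whitney statistic: the fraction of
    (positive, negative) score pairs ranked correctly, ties counting half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

print(auroc([0.9, 0.8, 0.7], [0.6, 0.5, 0.4]))  # 1.0: perfect separation
print(auroc([0.9, 0.4], [0.6, 0.5]))            # 0.5: chance level
```

Production code would use an O(n log n) rank-based formulation, but the pairwise definition above is the quantity being reported.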

3.
JAMA Ophthalmol ; 140(4): 383-391, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35297959

ABSTRACT

Importance: Automated deep learning (DL) analyses of fundus photographs potentially can reduce the cost and improve the efficiency of reading center assessment of end points in clinical trials. Objective: To investigate the diagnostic accuracy of DL algorithms trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS) to detect primary open-angle glaucoma (POAG). Design, Setting, and Participants: This diagnostic study included 1636 OHTS participants from 22 sites with a mean (range) follow-up of 10.7 (0-14.3) years. A total of 66 715 photographs from 3272 eyes were used to train and test a ResNet-50 model to detect the OHTS Endpoint Committee POAG determination based on optic disc (287 eyes, 3502 photographs) and/or visual field (VF) (198 eyes, 2300 visual fields) changes. Three independent test sets were used to evaluate the generalizability of the model. Main Outcomes and Measures: Areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities were calculated to compare model performance. Evaluation of false-positive rates was used to determine whether the DL model detected POAG before the OHTS Endpoint Committee POAG determination. Results: A total of 1147 participants were included in the training set (661 [57.6%] female; mean age, 57.2 years; 95% CI, 56.6-57.8), 167 in the validation set (97 [58.1%] female; mean age, 57.1 years; 95% CI, 55.6-58.7), and 322 in the test set (173 [53.7%] female; mean age, 57.2 years; 95% CI, 56.1-58.2). The DL model achieved an AUROC of 0.88 (95% CI, 0.82-0.92) for the OHTS Endpoint Committee determination of optic disc or VF changes. For the OHTS end points based on optic disc changes or visual field changes alone, AUROCs were 0.91 (95% CI, 0.88-0.94) and 0.86 (95% CI, 0.76-0.93), respectively.
False-positive rates (at 90% specificity) were higher in photographs of eyes that later developed POAG by disc or visual field (27.5% [56 of 204]) compared with eyes that did not develop POAG (11.4% [50 of 440]) during follow-up. The diagnostic accuracy of the DL model developed on the optic disc end point applied to 3 independent data sets was lower, with AUROCs ranging from 0.74 (95% CI, 0.70-0.77) to 0.79 (95% CI, 0.78-0.81). Conclusions and Relevance: The model's high diagnostic accuracy using OHTS photographs suggests that DL has the potential to standardize and automate POAG determination for clinical trials and management. In addition, the higher false-positive rate in early photographs of eyes that later developed POAG suggests that DL models detected POAG in some eyes earlier than the OHTS Endpoint Committee, reflecting the OHTS design that emphasized a high specificity for POAG determination by requiring a clinically significant change from baseline.
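Sensitivity at a fixed specificity, the secondary metric reported in these studies, can be sketched as follows; the toy scores and the exact thresholding rule are illustrative assumptions:

```python
import math

def sensitivity_at_specificity(pos, neg, specificity=0.90):
    """Choose the score threshold that classifies at least the target
    fraction of negatives correctly, then report the fraction of
    positives scoring above it."""
    neg_sorted = sorted(neg)
    k = math.ceil(specificity * len(neg_sorted))
    thresh = neg_sorted[k - 1]
    return sum(1 for p in pos if p > thresh) / len(pos)

neg = [0.05 * i for i in range(10)]   # toy scores for non-progressing eyes
pos = [0.9, 0.8, 0.3]                 # toy scores for progressing eyes
print(round(sensitivity_at_specificity(pos, neg), 3))  # 0.667
```

Fixing specificity makes sensitivities comparable across models, which is why the papers report the metric this way rather than at each model's default operating point.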


Subjects
Deep Learning; Glaucoma, Open-Angle; Glaucoma; Ocular Hypertension; Optic Nerve Diseases; Female; Glaucoma/diagnosis; Humans; Intraocular Pressure; Male; Middle Aged; Ocular Hypertension/diagnosis; Ocular Hypertension/drug therapy; Optic Nerve Diseases/diagnosis; Visual Field Tests
4.
Ecol Evol ; 7(5): 1339-1353, 2017 03.
Article in English | MEDLINE | ID: mdl-28261447

ABSTRACT

Massive coral bleaching events associated with high sea surface temperatures are forecast to become more frequent and severe in the future due to climate change. Monitoring colony recovery from bleaching disturbances over multiyear time frames is important for improving predictions of future coral community changes. However, there are currently few multiyear studies describing long-term outcomes for coral colonies following acute bleaching events. We recorded colony pigmentation and size for bleached and unbleached groups of co-located conspecifics of three major reef-building scleractinian corals (Orbicella franksi, Siderastrea siderea, and Stephanocoenia michelini; n = 198 total) in Bocas del Toro, Panama, during the major 2005 bleaching event and then monitored pigmentation status and changes in live-tissue colony size for 8 years (2005-2013). Corals that were bleached in 2005 demonstrated markedly different response trajectories compared to unbleached colony groups, suffering extensive live tissue loss across all species: mean losses per colony 9 months post-bleaching were 26.2% (±5.4 SE) for O. franksi, 35.7% (±4.7 SE) for S. michelini, and 11.2% (±3.9 SE) for S. siderea. Two species, O. franksi and S. michelini, later recovered to net positive growth, which continued until a second thermal stress event in 2010. Following this event, all species again lost tissue, with previously unbleached groups experiencing greater declines than previously bleached conspecific groups, indicating a possible positive acclimative response.
However, despite this beneficial effect for previously bleached corals, all groups experienced substantial net tissue loss between 2005 and 2013, indicating that many important Caribbean reef-building corals will likely suffer continued tissue loss and may be unable to maintain current benthic coverage when faced with future thermal stress forecast for the region, even with potential benefits from bleaching-related acclimation.
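The per-colony tissue-loss summaries above are means with standard errors; a minimal sketch of that computation on hypothetical percentages (not the study's raw data):

```python
import math

def mean_se(values):
    """Mean and standard error (sample sd / sqrt(n)), the form in which
    the tissue-loss percentages above are reported."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    return m, sd / math.sqrt(n)

# hypothetical per-colony loss percentages for one species
m, se = mean_se([20.0, 30.0, 28.0, 26.0])
print(round(m, 1), round(se, 2))  # 26.0 2.16
```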

5.
IEEE Trans Pattern Anal Mach Intell ; 39(9): 1880-1891, 2017 09.
Article in English | MEDLINE | ID: mdl-28114056

ABSTRACT

Photometric stereo is widely used for 3D reconstruction. However, its use in scattering media such as water, biological tissue, and fog has been limited until now because of forward-scattered light from both the source and the object, as well as light scattered back from the medium (backscatter). Here we make three contributions to address the key modes of light propagation, under the common single-scattering assumption for dilute media. First, we show through extensive simulations that single-scattered light from a source can be approximated by a point light source with a single direction. This alleviates the need to handle light source blur explicitly. Next, we model the blur due to scattering of light from the object. We measure the object point-spread function and introduce a simple deconvolution method. Finally, we show how imaging fluorescence emission, where available, eliminates the backscatter component and increases the signal-to-noise ratio. Experimental results in a water tank, with different concentrations of scattering media added, show that deconvolution produces higher-quality 3D reconstructions than previous techniques and that, when combined with fluorescence, it can produce results similar to those in clear water even for highly turbid media.
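For readers unfamiliar with the underlying technique: classic Lambertian photometric stereo recovers a surface normal from intensities observed under known light directions by solving a linear system. The sketch below shows only that clear-water baseline, not the paper's scattering, deconvolution, or fluorescence models; the light directions and intensity are toy values.

```python
def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def photometric_normal(lights, intensities):
    """Lambertian model I = L @ (albedo * n): solve for the scaled
    normal, then normalize away the albedo."""
    g = solve3(lights, intensities)
    norm = sum(x * x for x in g) ** 0.5
    return [x / norm for x in g]

# three known light directions, one Lambertian surface point (toy data)
n = photometric_normal([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
                       [0.0, 0.0, 0.8])
print([round(x, 3) for x in n])  # [0.0, 0.0, 1.0]
```

With more than three lights the same system is solved in the least-squares sense, which is where the paper's scattering corrections enter.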

6.
Sci Rep ; 6: 23166, 2016 Mar 29.
Article in English | MEDLINE | ID: mdl-27021133

ABSTRACT

Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band, wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction in classification error rate when using both image types compared to using reflectance images alone. The improvements were particularly large for the coral reef genera Platygyra, Acropora, and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.
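The 22% figure is a relative reduction in error rate, not an absolute drop of 22 percentage points. With hypothetical error rates chosen to illustrate the arithmetic:

```python
def relative_error_reduction(baseline_err, new_err):
    """Relative reduction in error rate, the form of the 22% figure."""
    return (baseline_err - new_err) / baseline_err

# hypothetical error rates, chosen only to illustrate a 22% relative reduction
print(round(relative_error_reduction(0.18, 0.1404), 2))  # 0.22
```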


Subjects
Aquatic Organisms/growth & development; Automation/methods; Ecosystem; Fluorescence; Photography/methods; Algorithms; Animals; Anthozoa/classification; Anthozoa/growth & development; Aquatic Organisms/classification; Automation/instrumentation; Coral Reefs; Humans; Image Processing, Computer-Assisted/instrumentation; Image Processing, Computer-Assisted/methods; Indian Ocean; Israel; Neural Networks, Computer; Photography/instrumentation; Reproducibility of Results
7.
PLoS One ; 10(7): e0130312, 2015.
Article in English | MEDLINE | ID: mdl-26154157

ABSTRACT

Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates, but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.
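Annotator agreement of the kind quantified here is often summarized with Cohen's kappa; the sketch below uses kappa as a standard illustrative choice, not necessarily the exact statistic used in the study, and the labels are hypothetical:

```python
def cohen_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each annotator's label frequencies."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# toy point labels: 'c' = coral, 't' = turf algae (hypothetical annotations)
print(cohen_kappa(['c', 'c', 't', 't'], ['c', 'c', 't', 'c']))  # 0.5
```

Chance correction matters here because benthic categories are highly imbalanced, so raw percent agreement overstates consensus.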


Subjects
Coral Reefs; Environmental Monitoring/methods; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated; Seaweed/physiology; Algorithms; Animals; Anthozoa; Climate Change; Ecosystem; Humans; Models, Statistical; Observer Variation; Reproducibility of Results
8.
Environ Monit Assess ; 187(8): 496, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26156316

ABSTRACT

Size and growth rates for individual colonies are some of the most essential descriptive parameters for understanding coral communities, which are currently experiencing worldwide declines in health and extent. Accurately measuring coral colony size and changes over multiple years can reveal demographic, growth, or mortality patterns often not apparent from short-term observations and can expose environmental stress responses that may take years to manifest. Describing community size structure can reveal population dynamics patterns, such as periods of failed recruitment or patterns of colony fission, which have implications for the future sustainability of these ecosystems. However, rapidly and non-invasively measuring coral colony sizes in situ remains a difficult task, as three-dimensional underwater digital reconstruction methods are currently not practical for large numbers of colonies. Two-dimensional (2D) planar area measurements from projection of underwater photographs are a practical size proxy, although this method presents operational difficulties in obtaining well-controlled photographs in the highly rugose environment of the coral reef, and requires extensive time for image processing. Here, we present and test the measurement variance for a method of making rapid planar area estimates of small to medium-sized coral colonies using a lightweight monopod image-framing system and a custom semi-automated image segmentation analysis program. This method demonstrated a coefficient of variation of 2.26% for repeated measurements in realistic ocean conditions, a level of error appropriate for rapid, inexpensive field studies of coral size structure, inferring change in colony size over time, or measuring bleaching or disease extent of large numbers of individual colonies.
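The coefficient of variation reported for repeated measurements is simply the standard deviation expressed as a percentage of the mean; a sketch on hypothetical repeated area measurements of a single colony:

```python
import math

def coefficient_of_variation(xs):
    """CV = sample sd / mean, as a percentage; the repeatability metric
    quoted above (2.26% for repeated in-ocean measurements)."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return 100.0 * sd / m

# hypothetical repeated planar-area measurements of one colony (cm^2)
print(round(coefficient_of_variation([100.0, 102.0, 98.0, 101.0]), 2))  # 1.7
```

Because CV is dimensionless, it allows repeatability to be compared across colonies of very different sizes.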


Subjects
Anthozoa/physiology; Coral Reefs; Environmental Monitoring/methods; Animals; Anthozoa/growth & development; Image Processing, Computer-Assisted/methods; Photography/instrumentation; Population Dynamics
9.
Sci Rep ; 5: 7694, 2015 Jan 13.
Article in English | MEDLINE | ID: mdl-25582836

ABSTRACT

Coral reefs globally are declining rapidly because of both local and global stressors. Improved monitoring tools are urgently needed to understand the changes that are occurring at appropriate temporal and spatial scales. Coral fluorescence imaging tools have the potential to improve both ecological and physiological assessments. Although fluorescence imaging is regularly used for laboratory studies of corals, it has not yet been used for large-scale in situ assessments. Current obstacles to effective underwater fluorescence surveying include limited field-of-view due to low camera sensitivity, the need for nighttime deployment because of ambient light contamination, and the need for custom multispectral narrow band imaging systems to separate the signal into meaningful fluorescence bands. Here we describe the Fluorescence Imaging System (FluorIS), based on a consumer camera modified for greatly increased sensitivity to chlorophyll-a fluorescence, and we show high spectral correlation between acquired images and in situ spectrometer measurements. This system greatly facilitates underwater wide field-of-view fluorophore surveying during both night and day, and potentially enables improvements in semi-automated segmentation of live corals in coral reef photographs and juvenile coral surveys.


Subjects
Coral Reefs; Imaging, Three-Dimensional; Spectrometry, Fluorescence/methods; Animals; Automation; Light; Panama; Polynesia
10.
Invest Ophthalmol Vis Sci ; 55(3): 1684-95, 2014 Mar 19.
Article in English | MEDLINE | ID: mdl-24519427

ABSTRACT

PURPOSE: We evaluated three new pixelwise rates of retinal height changes (PixR) strategies to reduce false-positive errors while detecting glaucomatous progression. METHODS: Diagnostic accuracy of the nonparametric PixR-NP cluster test (CT), PixR-NP single threshold test (STT), and parametric PixR-P STT were compared to statistic image mapping (SIM) using the Heidelberg Retina Tomograph. We included 36 progressing eyes, 210 nonprogressing patient eyes, and 21 longitudinal normal eyes from the University of California, San Diego (UCSD) Diagnostic Innovations in Glaucoma Study. The multiple comparison problem due to simultaneous testing of retinal locations was addressed in PixR-NP CT by controlling the family-wise error rate (FWER) and in the STT methods by Lehmann-Romano's k-FWER. For the STT methods, progression was defined as an observed progression rate (ratio of the number of pixels with a significant rate of decrease, i.e., red pixels, to disk size) > 2.5%. The progression criterion for the CT and SIM methods was the presence of one or more significant (P < 1%) red-pixel clusters within the disk. RESULTS: Specificity in normals: CT = 81% (90%), PixR-NP STT = 90%, PixR-P STT = 90%, SIM = 90%. Sensitivity in progressing eyes: CT = 86% (86%), PixR-NP STT = 75%, PixR-P STT = 81%, SIM = 39%. Specificity in nonprogressing patient eyes: CT = 49% (55%), PixR-NP STT = 56%, PixR-P STT = 50%, SIM = 79%. Progression detected by PixR in nonprogressing patient eyes was associated with early signs of visual field change that did not yet meet our definition of glaucomatous progression. CONCLUSIONS: PixR provided higher sensitivity in progressing eyes than SIM, with similar specificity in normals, suggesting that PixR strategies can improve our ability to detect glaucomatous progression. Longer follow-up is necessary to determine whether nonprogressing eyes identified as progressing by these methods will develop glaucomatous progression. (ClinicalTrials.gov number, NCT00221897.)
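The STT progression criterion above can be sketched as follows; this toy version applies a plain per-pixel significance cutoff and omits the k-FWER adjustment described in the paper, and the synthetic p-values are illustrative:

```python
def flags_progression(p_values, disk_size, alpha=0.01, opr_threshold=0.025):
    """Toy STT-style rule: count significant pixels ('red pixels'),
    form the observed positive rate relative to disk size, and flag
    progression when OPR exceeds 2.5%. (No k-FWER adjustment here.)"""
    red = sum(1 for p in p_values if p < alpha)
    opr = red / disk_size
    return opr > opr_threshold, opr

# 30 significant pixels in a 1000-pixel disk (synthetic p-values)
progressed, opr = flags_progression([0.001] * 30 + [0.5] * 970, 1000)
print(progressed, opr)  # True 0.03
```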


Subjects
Diagnostic Errors/statistics & numerical data; Glaucoma, Open-Angle/diagnosis; Intraocular Pressure; Models, Statistical; Retina/pathology; Aged; Disease Progression; Female; Follow-Up Studies; Glaucoma, Open-Angle/physiopathology; Humans; Male; Middle Aged; Ophthalmoscopy/methods; Visual Fields
11.
IEEE Trans Pattern Anal Mach Intell ; 35(12): 2930-40, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24136431

ABSTRACT

We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.


Subjects
Algorithms; Consensus; Bayes Theorem; Face; Humans; Models, Theoretical; Pattern Recognition, Automated; Pattern Recognition, Visual
12.
Invest Ophthalmol Vis Sci ; 53(7): 3615-28, 2012 Jun 14.
Article in English | MEDLINE | ID: mdl-22491406

ABSTRACT

PURPOSE: To detect localized glaucomatous structural changes using proper orthogonal decomposition (POD) framework with false-positive control that minimizes confirmatory follow-ups, and to compare the results to topographic change analysis (TCA). METHODS: We included 167 participants (246 eyes) with ≥4 Heidelberg Retina Tomograph (HRT)-II exams from the Diagnostic Innovations in Glaucoma Study; 36 eyes progressed by stereo-photographs or visual fields. All other patient eyes (n = 210) were non-progressing. Specificities were evaluated using 21 normal eyes. Significance of change at each HRT superpixel between each follow-up and its nearest baseline (obtained using POD) was estimated using mixed-effects ANOVA. Locations with significant reduction in retinal height (red pixels) were determined using Bonferroni, Lehmann-Romano k-family-wise error rate (k-FWER), and Benjamini-Hochberg false discovery rate (FDR) type I error control procedures. Observed positive rate (OPR) in each follow-up was calculated as a ratio of number of red pixels within disk to disk size. Progression by POD was defined as one or more follow-ups with OPR greater than the anticipated false-positive rate. TCA was evaluated using the recently proposed liberal, moderate, and conservative progression criteria. RESULTS: Sensitivity in progressors, specificity in normals, and specificity in non-progressors, respectively, were POD-Bonferroni = 100%, 0%, and 0%; POD k-FWER = 78%, 86%, and 43%; POD-FDR = 78%, 86%, and 43%; POD k-FWER with retinal height change ≥50 µm = 61%, 95%, and 60%; TCA-liberal = 86%, 62%, and 21%; TCA-moderate = 53%, 100%, and 70%; and TCA-conservative = 17%, 100%, and 84%. CONCLUSIONS: With a stronger control of type I errors, k-FWER in POD framework minimized confirmatory follow-ups while providing diagnostic accuracy comparable to TCA. 
Thus, POD with k-FWER shows promise to reduce the number of confirmatory follow-ups required for clinical care and studies evaluating new glaucoma treatments. (ClinicalTrials.gov number, NCT00221897.).
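The k-FWER control used here generalizes Bonferroni: instead of guarding against any false positive, it guards against k or more. A single-step sketch (the single-step variant and the parameter values are illustrative assumptions):

```python
def k_fwer_rejections(p_values, k=10, alpha=0.05):
    """Single-step generalized Bonferroni (Lehmann & Romano): testing
    each p-value against k*alpha/m controls the probability of k or
    more false rejections at level alpha."""
    m = len(p_values)
    cutoff = k * alpha / m
    return [i for i, p in enumerate(p_values) if p <= cutoff]

# 100 synthetic p-values, three of them strongly significant
pvals = [0.0001] * 3 + [0.9] * 97
print(k_fwer_rejections(pvals))  # [0, 1, 2]
```

Relative to ordinary Bonferroni (k = 1), the cutoff is k times larger, which is the source of the extra power noted in the abstract.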


Subjects
Glaucoma/diagnosis; Ocular Hypertension/diagnosis; Optic Disk/pathology; Tomography, Optical Coherence/methods; Visual Fields; Adult; Disease Progression; Follow-Up Studies; Glaucoma/physiopathology; Humans; Intraocular Pressure; Male; Middle Aged; Ocular Hypertension/physiopathology; Ophthalmoscopy/methods; Optic Nerve Diseases/diagnosis; Reproducibility of Results
14.
IEEE Trans Pattern Anal Mach Intell ; 28(2): 302-15, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16468625

ABSTRACT

This paper addresses the problem of estimating the motion of a camera as it observes the outline (or apparent contour) of a solid bounded by a smooth surface in successive image frames. In this context, the surface points that project onto the outline of an object depend on the viewpoint and the only true correspondences between two outlines of the same object are the projections of frontier points where the viewing rays intersect in the tangent plane of the surface. In turn, the epipolar geometry is easily estimated once these correspondences have been identified. Given the apparent contours detected in an image sequence, a robust procedure based on RANSAC and a voting strategy is proposed to simultaneously estimate the camera configurations and a consistent set of frontier point projections by enforcing the redundancy of multiview epipolar geometry. The proposed approach is, in principle, applicable to orthographic, weak-perspective, and affine projection models. Experiments with nine real image sequences are presented for the orthographic projection case, including a quantitative comparison with the ground-truth data for the six data sets for which the latter information is available. Sample visual hulls have been computed from all image sequences for qualitative evaluation.
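The RANSAC-plus-voting machinery is easiest to see on a toy problem; the skeleton below fits a 2D line rather than the paper's multiview frontier-point estimation, but the sample-fit-score-keep loop is the same:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Generic RANSAC loop: sample a minimal set, fit a model
    (y = a*x + b), count inliers, keep the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample; skip for this simple model
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# ten collinear points on y = 2x + 1, plus two gross outliers
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 9.0), (5.0, 0.0)]
model, inliers = ransac_line(pts)
print(inliers)  # 10
```

In the paper the "model" is a camera configuration and the votes come from candidate frontier-point correspondences, but the robustness argument is identical: outlier correspondences cannot assemble a large consensus.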


Subjects
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Movement; Pattern Recognition, Automated/methods; Video Recording/methods; Information Storage and Retrieval/methods; Motion; Subtraction Technique
15.
Ultramicroscopy ; 104(1): 8-29, 2005 Aug.
Article in English | MEDLINE | ID: mdl-15935913

ABSTRACT

We present a completely automated algorithm for estimating the parameters of the contrast transfer function (CTF) of a transmission electron microscope. The primary contribution of this paper is the determination of the astigmatism prior to the estimation of the CTF parameters. The CTF parameter estimation is then reduced to a 1D problem using elliptical averaging. We have also implemented an automated method to calculate lower and upper cutoff frequencies to eliminate regions of the power spectrum which perturb the estimation of the CTF parameters. The algorithm comprises three optimization subproblems, two of which are proven to be convex. Results of the CTF estimation method are presented for images of carbon support films as well as for images of single particles embedded in ice and suspended over holes in the support film. A MATLAB implementation of the algorithm, called ACE, is freely available.


Subjects
Image Enhancement/methods; Microscopy, Electron, Transmission/instrumentation; Microscopy, Electron, Transmission/methods; Algorithms; Chaperonin 60/ultrastructure; Image Processing, Computer-Assisted
16.
IEEE Trans Pattern Anal Mach Intell ; 27(5): 684-98, 2005 May.
Article in English | MEDLINE | ID: mdl-15875791

ABSTRACT

Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: A large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.


Subjects
Algorithms; Artificial Intelligence; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Lighting; Models, Biological; Pattern Recognition, Automated/methods; Humans; Image Enhancement/methods; Information Storage and Retrieval/methods; Linear Models; Numerical Analysis, Computer-Assisted; Photometry/methods; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted
17.
J Struct Biol ; 145(1-2): 3-14, 2004.
Article in English | MEDLINE | ID: mdl-15065668

ABSTRACT

Manual selection of single particles in images acquired using cryo-electron microscopy (cryoEM) will become a significant bottleneck when datasets of a hundred thousand or even a million particles are required for structure determination at near atomic resolution. Algorithm development of fully automated particle selection is thus an important research objective in the cryoEM field. A number of research groups are making promising new advances in this area. Evaluation of algorithms using a standard set of cryoEM images is an essential aspect of this algorithm development. With this goal in mind, a particle selection "bakeoff" was included in the program of the Multidisciplinary Workshop on Automatic Particle Selection for cryoEM. Twelve groups participated by submitting the results of testing their own algorithms on a common dataset. The dataset consisted of 82 defocus pairs of high-magnification micrographs, containing keyhole limpet hemocyanin particles, acquired using cryoEM. The results of the bakeoff are presented in this paper along with a summary of the discussion from the workshop. It was agreed that establishing benchmark particles and using bakeoffs to evaluate algorithms are useful in promoting algorithm development for fully automated particle selection, and that the infrastructure set up to support the bakeoff should be maintained and extended to include larger and more varied datasets, and more criteria for future evaluations.


Subjects
Algorithms; Cryoelectron Microscopy/methods; Image Processing, Computer-Assisted/methods; Animals; Electronic Data Processing/methods; Hemocyanins/chemistry; Hemocyanins/ultrastructure; Imaging, Three-Dimensional; Mollusca; Protein Conformation
18.
J Struct Biol ; 145(1-2): 52-62, 2004.
Article in English | MEDLINE | ID: mdl-15065673

ABSTRACT

A new learning-based approach is presented for particle detection in cryo-electron micrographs using the Adaboost learning algorithm. The approach builds directly on the successful detectors developed for the domain of face detection. It is a discriminative algorithm which learns important features of the particle's appearance using a set of training examples of the particles and a set of images that do not contain particles. The algorithm is fast (10 s on a 1.3 GHz Pentium M processor), is generic, and is not limited to any particular shape or size of the particle to be detected. The method has been evaluated on a publicly available dataset of 82 cryoEM images of keyhole limpet hemocyanin (KLH). From 998 automatically extracted particle images, the 3-D structure of KLH has been reconstructed at a resolution of 23.2 Å, which is the same resolution as obtained using particles manually selected by a trained user.
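At the core of AdaBoost is the reweighting step that focuses subsequent weak learners on previously misclassified examples. A single-round sketch on toy data (the weak learner's predictions here are hypothetical, not features from the particle detector):

```python
import math

def adaboost_round(weights, preds, labels):
    """One AdaBoost step: compute the weak learner's weighted error,
    its vote weight alpha, then up-weight misclassified examples and
    renormalize the distribution."""
    total = sum(weights)
    err = sum(w for w, p, y in zip(weights, preds, labels) if p != y) / total
    alpha = 0.5 * math.log((1 - err) / err)
    new = [w * math.exp(-alpha if p == y else alpha)
           for w, p, y in zip(weights, preds, labels)]
    s = sum(new)
    return alpha, [w / s for w in new]

# four toy examples with labels in {-1, +1}; the weak learner misses example 2
alpha, w = adaboost_round([0.25] * 4, [1, 1, -1, -1], [1, 1, 1, -1])
print(round(alpha, 3), [round(x, 3) for x in w])  # 0.549 [0.167, 0.167, 0.5, 0.167]
```

After one round the misclassified example carries half the total weight, so the next weak learner is strongly pushed to get it right; the final detector is the alpha-weighted vote of all rounds.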


Subjects
Artificial Intelligence; Cryoelectron Microscopy/methods; Image Processing, Computer-Assisted/methods; Algorithms; Animals; Electronic Data Processing/methods; Fourier Analysis; Hemocyanins/chemistry; Hemocyanins/ultrastructure; Imaging, Three-Dimensional; Models, Molecular; Mollusca; Particle Size; Pattern Recognition, Automated; Protein Conformation; ROC Curve